Unified Robust Training for Graph Neural Networks Against Label Noise
Authors
Abstract
Graph neural networks (GNNs) have achieved state-of-the-art performance for node classification on graphs. The vast majority of existing works assume that genuine node labels are always provided for training. However, there has been very little research effort on how to improve the robustness of GNNs in the presence of label noise. Learning with label noise has been primarily studied in the context of image classification, but these techniques cannot be directly applied to graph-structured data, due to two major challenges faced by learning on graphs: label sparsity and label dependency. In this paper, we propose a new framework, UnionNET, for learning with noisy labels on graphs under a semi-supervised setting. Our approach provides a unified solution for robustly training GNNs and performing label correction simultaneously. The key idea is to perform label aggregation to estimate node-level class probability distributions, which are used to guide sample reweighting and label correction. Compared with existing works, UnionNET has two appealing advantages. First, it requires no extra clean supervision or explicit estimation of the noise transition matrix. Second, a unified learning framework is proposed to train GNNs in an end-to-end manner. Experimental results show that our approach: (1) is effective in improving model robustness against different types and levels of label noise; and (2) yields significant improvements over state-of-the-art baselines.
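The label-aggregation idea described in the abstract can be illustrated with a minimal, hypothetical sketch: neighbours' (possibly noisy) labels are pooled into a per-node class distribution, which then drives sample reweighting and label correction. Uniform neighbour weights stand in for UnionNET's learned attention weights; the function names and the correction threshold are illustrative, not the paper's API.

```python
from collections import defaultdict

def estimate_class_distributions(edges, labels, num_classes):
    """Aggregate the (possibly noisy) labels of each node's neighbours
    into a per-node class probability distribution. Uniform weights are
    used here; UnionNET itself learns attention-style weights."""
    neigh = defaultdict(list)
    for u, v in edges:
        neigh[u].append(v)
        neigh[v].append(u)
    dist = {}
    for node, label in labels.items():
        counts = [0.0] * num_classes
        counts[label] += 1.0  # include the node's own given label
        for n in neigh[node]:
            if n in labels:
                counts[labels[n]] += 1.0
        total = sum(counts)
        dist[node] = [c / total for c in counts]
    return dist

def reweight_and_correct(dist, labels, correct_threshold=0.7):
    """Down-weight nodes whose given label disagrees with the aggregated
    distribution, and relabel nodes where another class clearly dominates."""
    weights, corrected = {}, {}
    for node, label in labels.items():
        p = dist[node]
        weights[node] = p[label]  # aggregated support for the given label
        best = max(range(len(p)), key=p.__getitem__)
        corrected[node] = best if p[best] >= correct_threshold else label
    return weights, corrected
```

For example, a node labelled class 0 but surrounded by class-1 neighbours receives a low sample weight and is relabelled to class 1, mirroring the reweighting-plus-correction behaviour the abstract describes.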
Similar resources
Robust Loss Functions under Label Noise for Deep Neural Networks
In many applications of classifier learning, training data suffers from label noise. Deep networks are learned using huge training data where the problem of noisy labels is particularly relevant. The current techniques proposed for learning deep networks under label noise focus on modifying the network architecture and on algorithms for estimating true labels from noisy labels. An alternate app...
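The contrast between a standard loss and a noise-robust one can be sketched as follows. The bounded mean-absolute-error loss is the kind of symmetric loss this line of work analyses, whereas cross-entropy assigns an unbounded penalty to a confidently mislabeled sample; the function names are illustrative.

```python
import math

def cross_entropy(p, y):
    # Standard CE on a predicted distribution p for label y:
    # unbounded penalty, so a single mislabeled sample can dominate.
    return -math.log(max(p[y], 1e-12))

def mean_absolute_error(p, y):
    # MAE against the one-hot target: bounded in [0, 2], a symmetric
    # loss of the kind shown to be tolerant to uniform label noise.
    return sum(abs(p[k] - (1.0 if k == y else 0.0)) for k in range(len(p)))
```

On a confident prediction [0.01, 0.99] paired with a wrong label 0, cross-entropy is roughly 4.6 while MAE stays below its bound of 2, which is why such bounded losses limit the influence of noisy labels.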
Toward Robustness against Label Noise in Training Deep Discriminative Neural Networks
Collecting large training datasets, annotated with high-quality labels, is costly and time-consuming. This paper proposes a novel framework for training deep convolutional neural networks from noisy labeled datasets that can be obtained cheaply. The problem is formulated using an undirected graphical model that represents the relationship between noisy and clean labels, trained in a semisupervi...
How Do Neural Networks Overcome Label Noise?
This work provides an analytical expression for the effect of label noise on the performance of deep neural networks. [Figure 1: different types of random label noise on 5 of MNIST's 10 classes: (a) clean labels; (b) 20% random noise, 100% network prediction accuracy; (c) 20% randomly spread flip noise, 100% accuracy; (d) 20% locally concentrated noise, 80% accuracy.] DNNs are extremely resistant ...
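The noise types named in the figure caption can be reproduced with a minimal noising sketch; the helper names are hypothetical, and "flip" here means replacing a label with a fixed confusing class rather than a uniformly random one.

```python
import random

def add_uniform_noise(labels, noise_rate, num_classes, seed=0):
    """Replace a fraction `noise_rate` of labels with a class drawn
    uniformly at random (the draw may coincide with the true class)."""
    rng = random.Random(seed)
    noisy = list(labels)
    for i in range(len(noisy)):
        if rng.random() < noise_rate:
            noisy[i] = rng.randrange(num_classes)
    return noisy

def add_flip_noise(labels, noise_rate, flip_to, seed=0):
    """Flip a fraction `noise_rate` of labels to a fixed target class
    given by the mapping `flip_to` (pairwise flip noise)."""
    rng = random.Random(seed)
    return [flip_to[y] if rng.random() < noise_rate else y
            for y in labels]
```

Locally concentrated noise, the hardest case in the figure, would instead corrupt labels only within a contiguous region of feature space; that requires access to the features and is omitted here.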
An Effective Approach for Robust Metric Learning in the Presence of Label Noise
Many algorithms in machine learning, pattern recognition, and data mining are based on a similarity/distance measure. For example, the kNN classifier and clustering algorithms such as k-means require a similarity/distance function. Also, in Content-Based Information Retrieval (CBIR) systems, we need to rank the retrieved objects based on the similarity to the query. As generic measures such as ...
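The dependence of kNN on its distance measure, which the abstract highlights, can be shown with a minimal sketch; plain Euclidean distance is used here, whereas a metric-learning method would substitute a learned distance for `math.dist`.

```python
import math
from collections import Counter

def knn_predict(train_x, train_y, query, k=3):
    """Plain k-nearest-neighbour classification. The prediction hinges
    entirely on the distance measure, which is why a metric learned from
    noisily labeled data can degrade the whole classifier."""
    order = sorted(range(len(train_x)),
                   key=lambda i: math.dist(train_x[i], query))
    votes = Counter(train_y[i] for i in order[:k])
    return votes.most_common(1)[0][0]
```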
Classification on Soft Labels Is Robust against Label Noise
In a scenario of supervised classification of data, labeled training data is essential. Unfortunately, the process by which those labels are obtained is not error-free, for example due to human nature. The aim of this work is to find out what impact noise on the labels has, and we do so by artificially adding it. An algorithm for the noising procedure is described. Not only individual classifie...
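One simple way to construct the soft targets this line of work studies is label smoothing, which spreads a little probability mass over all classes so that a flipped hard label no longer carries full weight. A minimal sketch, with an illustrative smoothing value:

```python
def to_soft_labels(y, num_classes, smoothing=0.1):
    """Label smoothing: the true class keeps mass (1 - smoothing), and
    `smoothing` is spread uniformly over all classes, so each soft
    target still sums to 1."""
    off = smoothing / num_classes
    return [[off + ((1.0 - smoothing) if k == y_i else 0.0)
             for k in range(num_classes)]
            for y_i in y]
```

With two classes and smoothing 0.1, a hard label 0 becomes the soft target [0.95, 0.05].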
Journal
Journal title: Lecture Notes in Computer Science
Year: 2021
ISSN: 1611-3349, 0302-9743
DOI: https://doi.org/10.1007/978-3-030-75762-5_42